Machine learning (ML) research has generally focused on models, while the most prominent datasets are used for everyday ML tasks without regard for the breadth, difficulty, and faithfulness of the problems they represent. Neglecting the fundamental importance of datasets has caused serious problems, including data cascades in real-world applications and the saturation of model quality on dataset-driven benchmarks, and has hindered research growth. To address this, we present DataPerf, a benchmark package for evaluating ML datasets and the algorithms that operate on them. We intend to enable a "data ratchet," in which training sets help to evaluate test sets for the same problem, and vice versa. This feedback-driven strategy will generate a virtuous cycle that accelerates data-centric AI. The MLCommons Association will maintain DataPerf.
Interpretability is becoming an active research topic as machine learning (ML) models are used more widely to make critical decisions. Tabular data is one of the most commonly used data modalities in applications such as healthcare and finance. Most existing interpretability methods for tabular data report only feature-importance scores, whether local (per example) or global (per model), but provide no interpretation or visualization of how features interact. We address this limitation by introducing Feature Vectors, a new global interpretability method designed for tabular datasets. Beyond providing feature importance, Feature Vectors discovers the inherent semantic relationships among features through an intuitive feature-visualization technique. Our systematic experiments demonstrate the empirical utility of this new method by applying it to several real-world datasets. We also provide an easy-to-use Python package for Feature Vectors.
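The abstract does not spell out the algorithm, so the following is a hedged stand-in rather than the Feature Vectors method itself: it pairs a generic global importance score (permutation importance) with a 2-D embedding of the feature columns, illustrating the kind of "importance plus semantic proximity" view the paper describes. The model choice and embedding are assumptions.

```python
# Hedged stand-in, NOT the paper's algorithm: generic global importance
# plus a 2-D embedding of feature columns, so features with similar
# value patterns land near each other in the visualization.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.decomposition import PCA

def feature_map(X, y, feature_names):
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    importance = permutation_importance(model, X, y, n_repeats=10).importances_mean
    # Embed the feature *columns*: each feature becomes a point whose
    # neighbors are features behaving similarly across samples.
    coords = PCA(n_components=2).fit_transform(X.T)
    return {name: {"xy": xy, "importance": imp}
            for name, xy, imp in zip(feature_names, coords, importance)}
```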
As data becomes the fuel driving technological and economic growth, a fundamental challenge is how to quantify the value of data in algorithmic predictions and decisions. For example, in healthcare and consumer markets, it has been suggested that individuals should be compensated for the data they generate, but it is not clear what an equitable valuation for individual data would be. In this work, we develop a principled framework to address data valuation in the context of supervised machine learning. Given a learning algorithm trained on n data points to produce a predictor, we propose data Shapley as a metric to quantify the value of each training datum to the predictor's performance. The data Shapley value uniquely satisfies several natural properties of equitable data valuation. We develop Monte Carlo and gradient-based methods to efficiently estimate data Shapley values in practical settings where complex learning algorithms, including neural networks, are trained on large datasets. In addition to being equitable, extensive experiments across biomedical, image, and synthetic data demonstrate that data Shapley has several other benefits: 1) it is more powerful than the popular leave-one-out or leverage score in providing insight into what data is more valuable for a given learning task; 2) low-Shapley-value data effectively capture outliers and corruptions; 3) high-Shapley-value data inform what type of new data to acquire to improve the predictor.
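A minimal sketch of the truncated Monte Carlo flavor of this estimator, assuming binary labels and a scikit-learn logistic-regression learner; the truncation tolerance, learner, and permutation budget are illustrative, not the paper's exact configuration.

```python
# Sketch of truncated Monte Carlo estimation of data Shapley values.
# Assumptions: binary labels in {0, 1}, a logistic-regression learner,
# and illustrative tolerance/permutation settings.
import numpy as np
from sklearn.linear_model import LogisticRegression

def tmc_data_shapley(X, y, X_val, y_val, n_perms=100, tol=1e-3):
    n = len(X)
    phi = np.zeros(n)

    def score_of(idx):
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        return model.score(X_val, y_val)

    full_score = score_of(np.arange(n))
    base_score = max(np.mean(y_val), 1 - np.mean(y_val))  # majority-class accuracy
    for t in range(1, n_perms + 1):
        perm = np.random.permutation(n)
        prev = base_score
        for j, idx in enumerate(perm):
            subset = perm[: j + 1]
            # A classifier needs both classes present before it can be fit.
            score = prev if len(np.unique(y[subset])) < 2 else score_of(subset)
            phi[idx] += (score - prev - phi[idx]) / t  # running mean of marginals
            prev = score
            # Truncation: once performance is close to the full-data score,
            # treat the remaining points' marginal contributions as zero.
            if abs(full_score - score) < tol:
                for rest in perm[j + 1:]:
                    phi[rest] += (0.0 - phi[rest]) / t
                break
    return phi
```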
In order for machine learning to be trusted in many applications, it is critical to be able to reliably explain why the machine learning algorithm makes certain predictions. For this reason, a variety of methods have been developed recently to interpret neural network predictions by providing, for example, feature importance maps. For both scientific robustness and security reasons, it is important to know to what extent the interpretations can be altered by small systematic perturbations to the input data, which might be generated by adversaries or by measurement biases. In this paper, we demonstrate how to generate adversarial perturbations that produce perceptibly indistinguishable inputs that are assigned the same predicted label, yet have very different interpretations. We systematically characterize the robustness of interpretations generated by several widely used feature-importance interpretation methods (feature importance maps, integrated gradients, and DeepLIFT) on ImageNet and CIFAR-10. In all cases, our experiments show that systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly susceptible to adversarial attack. Our analysis of the geometry of the Hessian matrix gives insight into why robustness is a general challenge to current interpretation approaches.
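A minimal sketch of such an interpretation attack in PyTorch: perturb the input within a small L-infinity ball to push the gradient saliency map away from the original while keeping the predicted label. The step size, budget, and iteration count are illustrative assumptions, and the network is assumed to use smooth activations (e.g. softplus in place of ReLU) so that second-order gradients are informative.

```python
# Sketch of an attack on gradient-based saliency: keep the label, move
# the interpretation. Hyperparameters are illustrative; `model` should
# use smooth activations so second-order gradients are informative.
import torch

def interpretation_attack(model, x, epsilon=0.01, lr=1e-3, steps=100):
    model.eval()
    label = model(x).argmax(dim=1)

    def saliency(inp):
        # Gradient of the predicted-class score w.r.t. the input;
        # create_graph=True makes the saliency itself differentiable.
        score = model(inp).gather(1, label.unsqueeze(1)).sum()
        (grad,) = torch.autograd.grad(score, inp, create_graph=True)
        return grad.abs()

    x0 = x.clone().detach().requires_grad_(True)
    target = saliency(x0).detach()  # the original interpretation

    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -(saliency(x_adv) - target).norm()  # push the maps apart
        loss.backward()
        opt.step()
        with torch.no_grad():  # project back into the L-infinity ball
            x_adv.data = x0.data + (x_adv.data - x0.data).clamp(-epsilon, epsilon)
    with torch.no_grad():  # accept only label-preserving perturbations
        if not torch.equal(model(x_adv).argmax(dim=1), label):
            return x0.detach()
    return x_adv.detach()
```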
With the progress of sensor technology in wearables, the collection and analysis of PPG signals are gaining more interest. Using machine learning, the cardiac rhythm corresponding to PPG signals can be used to predict different tasks such as activity recognition, sleep stage detection, or more general health status. However, supervised learning is often limited by the amount of available labeled data, which is typically expensive to obtain. To address this problem, we propose a Self-Supervised Learning (SSL) method with a pretext task of signal reconstruction to learn an informative, generalized PPG representation. The performance of the proposed SSL framework is compared with two fully supervised baselines. The results show that in a very limited labeled-data setting (10 or fewer samples per class), using SSL is beneficial, and a simple classifier trained on SSL-learned representations outperforms fully supervised deep neural networks. However, the results also reveal that the SSL-learned representations are too focused on encoding the subjects: there is high inter-subject variability in these representations, which makes working with this data more challenging when labeled data is scarce. The high inter-subject variability suggests that there is still room for improvement in representation learning. In general, the results suggest that SSL may pave the way for the broader use of machine learning models on PPG data in label-scarce regimes.
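A minimal sketch of a signal-reconstruction pretext task in PyTorch: a 1-D convolutional autoencoder reconstructs unlabeled PPG windows, and its encoder output is later reused as the representation for a small downstream classifier. The architecture and the 256-sample window length are assumptions, not the paper's exact design.

```python
# Sketch of a reconstruction pretext task: a 1-D convolutional
# autoencoder over 256-sample PPG windows (both are assumptions).
import torch
import torch.nn as nn

class PPGAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, latent_dim, 7, stride=2, padding=3),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 32, 7, stride=2, padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 16, 7, stride=2, padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)  # the representation reused downstream
        return self.decoder(z), z

model = PPGAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 256)              # a batch of unlabeled PPG windows
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction pretext loss
loss.backward()
opt.step()
# Downstream: freeze the encoder and train a simple classifier on z.
```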
Recent research has proposed a series of specialized optimization algorithms for deep multi-task models. It is often claimed that these multi-task optimization (MTO) methods yield solutions superior to those obtained by simply optimizing a weighted average of the task losses. In this paper, we conduct large-scale experiments on a variety of language and vision tasks to examine the empirical validity of these claims. We show that, despite the added design and computational complexity of these algorithms, MTO methods do not yield any performance improvement beyond what is achievable with traditional optimization approaches. We highlight alternative strategies that consistently improve the performance profile and point out common training pitfalls that may cause suboptimal results. Finally, we outline the challenges of reliably evaluating the performance of MTO algorithms and discuss potential solutions.
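For concreteness, this is the fixed-weight scalarization baseline that the paper compares MTO methods against, sketched in PyTorch with a user-supplied list of per-task loss functions; the two-argument loss signature is an illustrative assumption.

```python
# Sketch of the scalarization baseline: one optimizer step on
# loss = sum_i w_i * L_i. The (model, batch) loss signature is assumed.
import torch

def scalarized_step(model, batch, task_losses, weights, optimizer):
    optimizer.zero_grad()
    total = sum(w * loss_fn(model, batch)
                for w, loss_fn in zip(weights, task_losses))
    total.backward()
    optimizer.step()
    return total.item()
```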
We present ZeroEGGS, a neural network framework for speech-driven gesture generation with zero-shot style control. This means that style can be controlled using only a short example motion clip, even for motion styles unseen during training. Our model uses a variational framework to learn a style embedding, making it possible to modify style through latent-space manipulation or through blending and scaling of style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs for the same input, addressing the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state-of-the-art techniques in naturalness of motion, appropriateness for speech, and portrayal of style. Finally, we release a high-quality dataset of full-body gesture motion, including fingers, with speech, spanning 19 different styles.
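A minimal sketch of the blending-and-scaling idea on style embeddings; `style_encoder` and `gesture_decoder` are hypothetical stand-ins for the paper's networks, and using a single deterministic embedding per clip is a simplifying assumption over the variational formulation.

```python
# Sketch of style blending and scaling in the latent space;
# `style_encoder` and `gesture_decoder` are hypothetical stand-ins.
import torch

def blended_style(style_encoder, clip_a, clip_b, alpha=0.5, scale=1.0):
    z_a = style_encoder(clip_a)  # style embedding of example clip A
    z_b = style_encoder(clip_b)  # style embedding of example clip B
    # Interpolate between the two styles, then optionally exaggerate.
    return scale * (alpha * z_a + (1.0 - alpha) * z_b)

# gestures = gesture_decoder(speech_features, blended_style(...))
```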
This paper focuses on efficient landmark management in radar-based simultaneous localization and mapping (SLAM). Landmark management is necessary to maintain a consistent map of the estimated landmarks relative to the estimated platform pose. This task is particularly important when facing multiple detections from the same landmark and/or dynamic environments in which the locations of landmarks can change. A further challenge with radar data is the presence of false detections. We therefore propose a simple yet efficient rule-based solution for radar SLAM landmark management. Our solution involves several steps: new landmarks need to be detected and included, false landmarks need to be identified and removed, and the consistency of the landmarks registered in the map needs to be maintained. To illustrate our solution, we run an extended Kalman filter SLAM algorithm in an environment containing stationary and temporarily stationary landmarks. Our simulation results demonstrate that the proposed solution reliably manages landmarks even when faced with false detections and multiple detections from the same landmark.
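A minimal sketch of rule-based landmark bookkeeping of this kind: detections are gated to nearby landmarks (so multiple detections of one landmark collapse into a single entry), landmarks are confirmed after repeated hits, and entries missed too often are dropped as false or vanished. The gating distance and thresholds are illustrative, not the paper's tuned values.

```python
# Sketch of rule-based landmark bookkeeping; gate, confirmation, and
# deletion thresholds are illustrative.
import numpy as np

class LandmarkManager:
    def __init__(self, gate=1.0, confirm_hits=3, max_misses=5):
        self.gate = gate
        self.confirm_hits = confirm_hits
        self.max_misses = max_misses
        self.landmarks = []  # each: {"pos": ndarray, "hits": int, "misses": int}

    def update(self, detections):
        matched = set()
        for det in detections:
            # Gate each detection to the nearest landmark, so multiple
            # detections of one landmark collapse into a single entry.
            dists = [np.linalg.norm(lm["pos"] - det) for lm in self.landmarks]
            if dists and min(dists) < self.gate:
                i = int(np.argmin(dists))
                lm = self.landmarks[i]
                lm["pos"] = 0.5 * (lm["pos"] + det)  # simple running merge
                lm["hits"] += 1
            else:
                self.landmarks.append({"pos": np.asarray(det), "hits": 1, "misses": 0})
                i = len(self.landmarks) - 1
            matched.add(i)
        for i, lm in enumerate(self.landmarks):
            if i not in matched:
                lm["misses"] += 1
        # Drop likely false detections and landmarks that have vanished.
        self.landmarks = [lm for lm in self.landmarks if lm["misses"] <= self.max_misses]

    def confirmed(self):
        # Only confirmed landmarks would be registered in the EKF map.
        return [lm for lm in self.landmarks if lm["hits"] >= self.confirm_hits]
```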
Understanding how users in a network update their opinions based on the opinions of their neighbors has attracted great interest in the field of network science, and a growing body of literature recognizes the significance of this problem. In this research paper, we propose a new dynamic model of opinion formation in directed networks. In this model, each node's opinion is updated as the weighted average of its neighbors' opinions, where the weights represent social influence. We define a new centrality measure as a metric of social influence based on both influence and conformity. We evaluate this new approach using two opinion formation models: (i) the DeGroot model and (ii) our own proposed model. Previously published studies did not consider conformity and accounted only for a node's influence when computing social impact. Under our definition, nodes with low in-degree and high out-degree that are connected to nodes with high in-degree and low out-degree have higher centrality. As the main contribution of this research, we propose an algorithm for finding a small subset of nodes in a social network that can significantly influence the opinions of the other nodes. Experiments on real-world data show that the proposed algorithm significantly outperforms previously published state-of-the-art methods.
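A minimal sketch of the weighted-average update in the DeGroot model, with a row-stochastic influence matrix W on a three-node directed network; the weights and initial opinions are illustrative.

```python
# Sketch of the DeGroot weighted-average update on a directed network;
# W is row-stochastic, W[i, j] being the influence of node j on node i.
import numpy as np

def degroot_step(W, x):
    return W @ x  # each opinion becomes a weighted mean of neighbors'

W = np.array([[0.6, 0.3, 0.1],  # illustrative three-node network
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
x = np.array([1.0, 0.0, 0.5])   # initial opinions
for _ in range(50):
    x = degroot_step(W, x)
print(x)  # opinions converge toward consensus
```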
Little is known about the training dynamics of adaptive gradient methods such as Adam in deep learning. In this paper, we shed light on the behavior of these algorithms in the full-batch and sufficiently large-batch settings. Specifically, we empirically demonstrate that during full-batch training, the maximum eigenvalue of the preconditioned Hessian typically equilibrates at a certain numerical value: the stability threshold of a gradient descent algorithm. For Adam with step size $\eta$ and $\beta_1 = 0.9$, this stability threshold is $38/\eta$. Similar effects occur during minibatch training, especially as the batch size grows. Yet even though adaptive methods train at this "Adaptive Edge of Stability" (AEoS), their behavior in this regime differs markedly from that of non-adaptive methods at the EoS. Whereas non-adaptive algorithms at the EoS are blocked from entering high-curvature regions of the loss landscape, adaptive gradient methods at the AEoS can keep advancing into high-curvature regions while adapting the preconditioner to compensate. Our findings can serve as a foundation for the community's future understanding of adaptive gradient methods in deep learning.
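A minimal sketch of how the relevant quantity can be tracked: power iteration on Hessian-vector products in PyTorch estimates the largest Hessian eigenvalue (the preconditioned analogue for Adam would additionally fold in the optimizer's preconditioner, which is omitted here). The iteration count is illustrative.

```python
# Sketch: power iteration on Hessian-vector products estimates the
# largest Hessian eigenvalue; Adam's preconditioner is omitted here.
import torch

def top_hessian_eigenvalue(loss, params, iters=50):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)
    v /= v.norm()
    eig = torch.tensor(0.0)
    for _ in range(iters):
        # Hessian-vector product via differentiating g.v w.r.t. params.
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv]).detach()
        eig = v @ hv        # Rayleigh quotient estimate
        v = hv / hv.norm()  # power-iteration step
    return eig.item()
```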